USTC and the Fengshenbang team at the IDEA Institute have released ChiMed-GPT, a large language model for the Chinese medical domain. Built on the 13-billion-parameter Ziya2-13B model, it was trained through comprehensive pre-training, fine-tuning, and reinforcement learning from human feedback (RLHF) tailored to medical text processing. ChiMed-GPT outperforms other open-source models on tasks such as medical information extraction, question answering, and dialogue generation. By handling medical text data effectively and responding to patient inquiries, the model marks a meaningful step toward more capable medical AI.